Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@wmclark@publishing.social
2024-02-02 11:52:54

Beyond the pale: where are all the films about ‘whiteness’? | Movies | The Guardian theguardian.com/film/2024/feb/

@arXiv_hepex_bot@mastoxiv.page
2024-05-01 07:01:53

First observation of $\Lambda_{b}^{0} \rightarrow \Sigma_c^{(*)++} D^{(*)-} K^{-}$ decays
LHCb collaboration, et al.
arxiv.org/abs/2404.19510 arxiv.org/pdf/2404.19510
arXiv:2404.19510v1 Announce Type: new
Abstract: The four decays $\Lambda_{b}^{0} \rightarrow \Sigma_c^{(*)++} D^{(*)-} K^{-}$ are observed for the first time using proton-proton collision data collected with the LHCb detector at a centre-of-mass energy of $13\,\mathrm{TeV}$, corresponding to an integrated luminosity of $6\,\mathrm{fb}^{-1}$. Taking the $\Lambda_b^0 \rightarrow \Lambda_c^{+} \overline{D}^0 K^{-}$ decay as the reference channel, the following branching fraction ratios are measured:
$$\begin{aligned}
\frac{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Sigma_{c}^{++} D^{-} K^{-})}{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Lambda_{c}^{+} \overline{D}^{0} K^{-})} &= 0.282 \pm 0.016 \pm 0.016 \pm 0.005, \\
\frac{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Sigma_{c}^{*++} D^{-} K^{-})}{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Sigma_{c}^{++} D^{-} K^{-})} &= 0.460 \pm 0.052 \pm 0.028, \\
\frac{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Sigma_{c}^{++} D^{*-} K^{-})}{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Sigma_{c}^{++} D^{-} K^{-})} &= 2.261 \pm 0.202 \pm 0.129 \pm 0.046, \\
\frac{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Sigma_{c}^{*++} D^{*-} K^{-})}{\mathcal{B}(\Lambda_{b}^{0} \rightarrow \Sigma_{c}^{++} D^{-} K^{-})} &= 0.896 \pm 0.137 \pm 0.066 \pm 0.018,
\end{aligned}$$
where the first uncertainties are statistical, the second are systematic, and the third are due to uncertainties in the branching fractions of intermediate particle decays. These initial observations mark the beginning of pentaquark searches in these modes, with more data set to become available following the LHCb upgrade.
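A quick charge-balance check (an editorial sketch using standard hadron charges, not text from the paper) confirms the doubly charged $\Sigma_c^{(*)++}$ reconstruction above: with $Q(\Lambda_b^0)=0$, $Q(D^{(*)-})=-1$, and $Q(K^-)=-1$,
$$Q(\Sigma_c^{(*)}) = Q(\Lambda_b^0) - Q(D^{(*)-}) - Q(K^-) = 0 - (-1) - (-1) = +2,$$
so only the $\Sigma_c^{++}$ and $\Sigma_c^{*++}$ states can accompany a $D^{(*)-} K^{-}$ pair in these final states.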

@NFL@darktundra.xyz
2024-04-01 18:00:22

Browns RB Nyheim Hines says he's 'learned my lessons' following jet ski accident, recovery on track nfl.com/news/browns-rb-nyheim-

@Infrogmation@mastodon.online
2024-04-01 14:51:16

Season's Greetings.
People used to mark April Fool's Day by sending silly postcards.
Now, thanks to AI and the internet, the process is entirely automated, and celebrated every day of the year.
#1April #AprilFools

French postcard, c. 1905. Text "1er Avril" (April 1st). 

Image shows a woman standing beside a river holding 4 fish. Above her head another fish is flying, with wings like an early airplane.

@randombaywatch@mastodon.social
2024-03-01 10:55:00

#DevinoTricoche as Scorch
Season 2 Episode 15 "Sea of Flames"
#RandomBaywatch #lvdlpx #Baywatch

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:48:59

Do Large Language Models Understand Conversational Implicature -- A Case Study with a Chinese Sitcom
Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu
arxiv.org/abs/2404.19509 arxiv.org/pdf/2404.19509
arXiv:2404.19509v1 Announce Type: new
Abstract: Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom $\textit{My Own Swordsman}$. It includes 200 carefully handcrafted questions, each annotated with the Gricean maxims that have been violated. We test eight closed-source and open-source LLMs on two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on the multiple-choice questions, and CausalLM follows with 78.5% accuracy. Other models, including GPT-3.5 and several open-source models, achieve lower accuracies ranging from 20% to 60% on the multiple-choice questions. Human raters were asked to rate the explanations of the implicatures generated by the LLMs on their reasonability, logic, and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4's, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find that LLMs' performance does not vary significantly across Gricean maxims, suggesting that LLMs do not seem to process implicatures derived from different maxims differently. Our data and code are available at github.com/sjtu-compling/llm-p.
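A minimal Python sketch of the multiple-choice accuracy metric described in the abstract. The field names ("question", "options", "answer") and the data file name are assumptions for illustration; the actual SwordsmanImp schema in the linked repository may differ, and the predict callable stands in for whatever LLM query is used.

import json

def multiple_choice_accuracy(items, predict):
    # Fraction of questions where the model's chosen option matches the key.
    correct = 0
    for item in items:
        choice = predict(item["question"], item["options"])  # e.g. one LLM call
        correct += int(choice == item["answer"])
    return correct / len(items)

# Usage with a trivial always-pick-the-first-option baseline
# (hypothetical data file):
with open("swordsmanimp.json", encoding="utf-8") as f:
    items = json.load(f)
print(f"accuracy: {multiple_choice_accuracy(items, lambda q, o: o[0]):.1%}")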

If you want to know why local papers are failing, this is a sign of the times.
The LA Times this morning had no ads. The A section had literally not a single ad, and the B section had a few obituaries and classified ads, but not a single display ad. Not one.
jabberwocking.co…

@NFL@darktundra.xyz
2024-03-02 13:09:23

Texans LB Cashman feeling good coming off breakout season as free agency looms espn.com/nfl/story/_/id/396052

@Techmeme@techhub.social
2024-02-24 16:26:20

In Reddit's S-1, Steve Huffman offers a look at the site's beginnings without naming cofounder Alexis Ohanian, reflecting their schism during the BLM protests (Paresh Dave/Wired)
wired.com/story/reddit-ipo-fil

@NFL@darktundra.xyz
2024-02-02 11:41:57

The reason Bill Belichick is unemployed, plus the lingering worry around Joel Embiid theathletic.com/5246293/2024/0